Global convergence of Oja's PCA learning algorithm with a non-zero-approaching adaptive learning rate

Authors
Abstract

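The abstract itself is not reproduced on this listing page. As a rough illustration of the subject named in the title, the sketch below runs Oja's single-unit PCA rule over streaming samples with an adaptive learning rate that is kept bounded away from zero; the particular schedule (a decaying rate with a constant floor) is an assumption for illustration, not necessarily the rate analyzed in the paper.

```python
import numpy as np

def oja_pca(samples, eta0=0.5, eta_min=0.05, seed=0):
    """Estimate the top principal direction with Oja's rule.

    The learning rate decays but is floored at eta_min, so it never approaches
    zero -- an illustrative stand-in for a "non-zero-approaching adaptive
    learning rate" (assumed schedule, not the paper's).
    """
    rng = np.random.default_rng(seed)
    d = samples.shape[1]
    w = rng.normal(size=d)
    w /= np.linalg.norm(w)
    for t, x in enumerate(samples, start=1):
        eta = max(eta0 / t, eta_min)        # decaying rate with a non-zero floor
        y = x @ w                           # neuron output
        w += eta * y * (x - y * w)          # Oja's update: Hebbian term minus decay
        w /= np.linalg.norm(w)              # renormalize for numerical stability
    return w

# Usage: correlated Gaussian data, compared against the exact top eigenvector.
rng = np.random.default_rng(1)
C = np.array([[3.0, 1.0], [1.0, 2.0]])
X = rng.multivariate_normal(np.zeros(2), C, size=5000)
w = oja_pca(X)
top = np.linalg.eigh(C)[1][:, -1]
print(abs(w @ top))  # close to 1 when the estimate aligns with the top eigenvector
```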

Similar articles

A Stochastic PCA Algorithm with an Exponential Convergence Rate

We describe and analyze a simple algorithm for principal component analysis, SVR-PCA, which uses computationally cheap stochastic iterations, yet converges exponentially fast to the optimal solution. In contrast, existing algorithms suffer either from slow convergence, or computationally intensive iterations whose runtime scales with the data size. The algorithm builds on a recent variance-redu...
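Below is a hedged sketch of a variance-reduced stochastic PCA iteration in the spirit the abstract describes: a full-data term is recomputed once per epoch, with cheap stochastic corrections applied in between. The epoch structure, step size, and update form are assumptions for illustration, not necessarily the paper's exact algorithm.

```python
import numpy as np

def vr_stochastic_pca(X, eta=0.05, epochs=10, inner=None, seed=0):
    """Variance-reduced stochastic Oja-style iteration for the top eigenvector
    of (1/n) X^T X (illustrative sketch, not the paper's exact method)."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    inner = inner or n
    w = rng.normal(size=d)
    w /= np.linalg.norm(w)
    for _ in range(epochs):
        w_snap = w.copy()
        full = X.T @ (X @ w_snap) / n        # full-data term, computed once per epoch
        for _ in range(inner):
            i = rng.integers(n)
            x = X[i]
            # cheap stochastic step with a variance-reducing correction term
            g = x * (x @ w) - x * (x @ w_snap) + full
            w = w + eta * g
            w /= np.linalg.norm(w)
    return w
```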


Speedy Q-Learning: A Computationally Efficient Reinforcement Learning Algorithm with a Near-Optimal Rate of Convergence

We consider the problem of model-free reinforcement learning (RL) in Markovian decision processes (MDPs) under the probably approximately correct (PAC) model. We introduce a new variant of Q-learning, called speedy Q-learning (SQL), to address the problem of slow convergence in the standard Q-learning algorithm, and prove PAC bounds on the performance of this algorithm. The bounds indica...
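For context, the sketch below implements standard tabular Q-learning with the classic 1/N(s,a) step size, i.e. the slowly converging baseline the abstract refers to rather than the speedy variant itself; the toy chain MDP and parameters are assumptions for illustration.

```python
import numpy as np

def chain_step(s, a, n_states=4):
    """Toy chain MDP (assumed example environment): action 1 moves right, action 0
    moves left; reaching the right end gives reward 1 and ends the episode."""
    s_next = min(s + 1, n_states - 1) if a == 1 else max(s - 1, 0)
    done = (s_next == n_states - 1)
    return s_next, (1.0 if done else 0.0), done

def q_learning(n_states=4, n_actions=2, episodes=300, gamma=0.95, eps=0.5, seed=0):
    """Standard tabular Q-learning with a 1/N(s,a) decaying step size."""
    rng = np.random.default_rng(seed)
    Q = np.zeros((n_states, n_actions))
    visits = np.zeros((n_states, n_actions))
    for _ in range(episodes):
        s, done = 0, False
        while not done:
            # epsilon-greedy behavior policy
            a = rng.integers(n_actions) if rng.random() < eps else int(Q[s].argmax())
            s_next, r, done = chain_step(s, a, n_states)
            visits[s, a] += 1
            alpha = 1.0 / visits[s, a]                # classic decaying step size
            target = r + (0.0 if done else gamma * Q[s_next].max())
            Q[s, a] += alpha * (target - Q[s, a])     # standard Q-learning update
            s = s_next
    return Q

print(np.round(q_learning(), 3))
```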


Parameter Optimization Algorithm with Improved Convergence Properties for Adaptive Learning

The error in an artificial neural network is a function of adaptive parameters (weights and biases) that needs to be minimized. Research on adaptive learning usually focuses on gradient algorithms that employ problem-dependent heuristic learning parameters. This fact usually results in a trade-off between the convergence speed and the stability of the learning algorithm. The paper investigates ...
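As a concrete example of the kind of heuristic, problem-dependent learning-rate adaptation the abstract refers to, the sketch below uses a simple "bold driver" rule (grow the step after the error decreases, shrink it after an increase); the specific growth/shrink factors are assumptions, and this is not the scheme proposed in the paper.

```python
import numpy as np

def bold_driver_gd(loss, grad, w0, eta=0.1, grow=1.05, shrink=0.5, steps=200):
    """Gradient descent whose step size adapts to the observed change in loss."""
    w = np.asarray(w0, dtype=float)
    prev = loss(w)
    for _ in range(steps):
        w_new = w - eta * grad(w)
        cur = loss(w_new)
        if cur <= prev:            # progress: accept the step and grow the rate
            w, prev, eta = w_new, cur, eta * grow
        else:                      # overshoot: reject the step and shrink the rate
            eta *= shrink
    return w

# Usage on a quadratic with minimum at [1, -2].
A = np.diag([1.0, 10.0])
b = np.array([1.0, -2.0])
loss = lambda w: 0.5 * (w - b) @ A @ (w - b)
grad = lambda w: A @ (w - b)
print(bold_driver_gd(loss, grad, np.zeros(2)))
```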


A Learning Algorithm for the Blind Separation of Non-zero Skewness

A neural computational approach to blind source separation was first introduced by Jutten and Herault [6], and further developed by others [9, 3, 7, 4]. Necessary and sufficient conditions for blind source separation have been proposed by Cardoso [1], Tong et al. [10, 11], and Comon [5]. There have been difficulties in implementing necessary and sufficient conditions by a neural network with no spurious...
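For background on the neural approach the abstract cites, here is a minimal Herault-Jutten-style recurrent separation network with a nonlinear anti-Hebbian weight update; the chosen nonlinearities (y**3 against y) and learning rate are conventional assumptions for illustration, not the rule derived in the paper.

```python
import numpy as np

def herault_jutten(X, eta=0.01, epochs=20, seed=0):
    """Herault-Jutten-style blind separation of two mixed signals (sketch).

    X has shape (n_samples, 2): each row is a linear mixture of two sources.
    Off-diagonal weights C are adapted with a nonlinear anti-Hebbian rule.
    """
    C = np.zeros((2, 2))
    for _ in range(epochs):
        for x in X:
            y = np.linalg.solve(np.eye(2) + C, x)   # recurrent output solving y = x - C y
            dC = eta * np.outer(y**3, y)            # anti-Hebbian cross-term update
            np.fill_diagonal(dC, 0.0)               # only off-diagonal weights adapt
            C += dC
    return C
```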


Convergence of Gradient Dynamics with a Variable Learning Rate

As multiagent environments become more prevalent we need to understand how this changes the agent-based paradigm. One aspect that is heavily affected by the presence of multiple agents is learning. Traditional learning algorithms have core assumptions, such as Markovian transitions, which are violated in these environments. Yet, understanding the behavior of learning algorithms in these domains...
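A rough sketch of gradient ascent with a WoLF-style ("win or learn fast") variable learning rate in a 2x2 zero-sum game follows: each player takes a small step when its current mixed strategy outperforms its running average and a larger step otherwise. The averaging-based win test, the payoff matrix, and the step sizes are assumptions for illustration, not the exact algorithm analyzed in the paper.

```python
import numpy as np

# Matching-pennies payoffs for the row player (the column player gets the negative).
R = np.array([[1.0, -1.0], [-1.0, 1.0]])

def wolf_gradient_ascent(steps=20000, d_win=0.01, d_lose=0.04):
    """Gradient ascent on mixed strategies with a WoLF-style variable learning rate.

    p and q are the probabilities of playing the first action. Each player uses the
    small rate d_win when "winning" (current strategy beats its running average
    against the opponent's current strategy) and d_lose otherwise.
    """
    p = q = 0.9
    p_avg = q_avg = 0.9
    for t in range(1, steps + 1):
        # Payoff gradients for the row (maximizing) and column (minimizing) player.
        grad_p = (R[0] - R[1]) @ np.array([q, 1 - q])
        grad_q = -(np.array([p, 1 - p]) @ (R[:, 0] - R[:, 1]))
        payoff = np.array([p, 1 - p]) @ R @ np.array([q, 1 - q])
        payoff_avg_p = np.array([p_avg, 1 - p_avg]) @ R @ np.array([q, 1 - q])
        payoff_avg_q = np.array([p, 1 - p]) @ R @ np.array([q_avg, 1 - q_avg])
        eta_p = d_win if payoff >= payoff_avg_p else d_lose
        eta_q = d_win if -payoff >= -payoff_avg_q else d_lose
        p = float(np.clip(p + eta_p * grad_p, 0.0, 1.0))
        q = float(np.clip(q + eta_q * grad_q, 0.0, 1.0))
        p_avg += (p - p_avg) / t
        q_avg += (q - q_avg) / t
    return p, q   # strategies tend to spiral in toward the mixed equilibrium (0.5, 0.5)

print(wolf_gradient_ascent())
```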



Journal

Journal title: Theoretical Computer Science

Year: 2006

ISSN: 0304-3975

DOI: 10.1016/j.tcs.2006.07.012